Localizing a semantic parser to support new languages requires effective cross-lingual generalization. Recent work has found success with machine translation or zero-shot methods, although these approaches can struggle to model how native speakers ask questions. We consider how to effectively leverage minimal annotated examples in new languages for few-shot cross-lingual semantic parsing. We introduce a first-order meta-learning algorithm to train a semantic parser with maximal sample efficiency during cross-lingual transfer. Our algorithm uses high-resource languages to train the parser and simultaneously optimizes for cross-lingual generalization to lower-resource languages. Results across six languages on ATIS demonstrate that our combination of generalization steps yields accurate semantic parsers sampling ≤10% of source training data in each new language. Our approach also trains a competitive model on Spider using English, with generalization to Chinese similarly sampling ≤10% of training data.
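The first-order meta-learning update can be sketched in a few lines on a toy model. The sketch below is Reptile-style (a common first-order scheme) on quadratic losses standing in for per-language training objectives; all names and the toy losses are illustrative, not the paper's actual parser or algorithm:

```python
import numpy as np

def inner_sgd(w, grad_fn, lr=0.1, steps=5):
    """Adapt the shared initialization on one language's loss with a few SGD steps."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

def meta_step(w, task_grads, meta_lr=0.5):
    """First-order (Reptile-style) meta-update: move the initialization
    toward the parameters adapted on each language's data."""
    adapted = [inner_sgd(w, g) for g in task_grads]
    direction = np.mean([wa - w for wa in adapted], axis=0)
    return w + meta_lr * direction

# Toy "languages": each task is a quadratic loss centred at a different optimum.
optima = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grads = [lambda w, c=c: w - c for c in optima]  # gradient of 0.5*||w - c||^2

w = np.zeros(2)
for _ in range(50):
    w = meta_step(w, grads)
# w converges toward an initialization that adapts quickly to every task.
```

The learned initialization settles between the per-task optima, which is the intuition behind optimizing for cross-lingual generalization rather than for any single language.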
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets. Despite promising results, current models still suffer from generating factually inconsistent summaries, reducing their utility for real-world application. Several recent efforts attempt to address this by devising models that automatically detect factual inconsistencies in machine generated summaries. However, they focus exclusively on English, a language with abundant resources. In this work, we leverage factual consistency evaluation models to improve multilingual summarization. We explore two intuitive approaches to mitigate hallucinations based on the signal provided by a multilingual NLI model, namely data filtering and controlled generation. Experimental results in the 45 languages from the XLSum dataset show gains over strong baselines in both automatic and human evaluation.
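Of the two approaches, data filtering is the simplest to sketch: drop training pairs whose summary the NLI model does not judge entailed by the source. The `toy_entail_prob` scorer below is a hypothetical stand-in for a multilingual NLI model, and the threshold is illustrative:

```python
def filter_training_pairs(pairs, entail_prob, threshold=0.5):
    """Keep only (document, summary) pairs whose summary the NLI
    scorer judges to be entailed by the document."""
    return [(doc, summ) for doc, summ in pairs
            if entail_prob(premise=doc, hypothesis=summ) >= threshold]

# Stand-in scorer: fraction of summary tokens supported by the document
# (a real system would query a multilingual NLI model here instead).
def toy_entail_prob(premise, hypothesis):
    doc_tokens = set(premise.lower().split())
    hyp_tokens = hypothesis.lower().split()
    return sum(t in doc_tokens for t in hyp_tokens) / max(len(hyp_tokens), 1)

pairs = [
    ("the cat sat on the mat", "the cat sat"),           # faithful
    ("the cat sat on the mat", "the dog barked loudly"), # hallucinated
]
kept = filter_training_pairs(pairs, toy_entail_prob, threshold=0.8)
```

Training only on `kept` removes the hallucinated pair, which is the signal-based mitigation the abstract describes.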
We consider the problem of automatically generating stories in multiple languages. Compared to prior work in monolingual story generation, crosslingual story generation allows for more universal research on story planning. We propose to use Prompting Large Language Models with Plans to study which plan is optimal for story generation. We consider 4 types of plans and systematically analyse how the outputs differ for different planning strategies. The study demonstrates that formulating the plans as question-answer pairs leads to more coherent generated stories while the plan gives more control to the story creators.
Automatic machine translation (MT) metrics are widely used to distinguish the translation qualities of machine translation systems across relatively large test sets (system-level evaluation). However, it is unclear if automatic metrics are reliable at distinguishing good translations from bad translations at the sentence level (segment-level evaluation). In this paper, we investigate how useful MT metrics are at detecting the success of a machine translation component when placed in a larger platform with a downstream task. We evaluate the segment-level performance of the most widely used MT metrics (chrF, COMET, BERTScore, etc.) on three downstream cross-lingual tasks (dialogue state tracking, question answering, and semantic parsing). For each task, we only have access to a monolingual task-specific model. We calculate the correlation between the metric's ability to predict a good/bad translation and the success/failure on the final task for the Translate-Test setup. Our experiments demonstrate that all metrics exhibit negligible correlation with the extrinsic evaluation of the downstream outcomes. We also find that the scores provided by neural metrics are not interpretable, mostly because of undefined ranges. Our analysis suggests that future MT metrics be designed to produce error labels rather than scores to facilitate extrinsic evaluation.
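The segment-level analysis boils down to correlating a continuous metric score with a binary downstream outcome, i.e. a point-biserial correlation. A minimal version on made-up segment scores and labels (the numbers are illustrative only):

```python
import numpy as np

def metric_task_correlation(metric_scores, task_success):
    """Point-biserial correlation between segment-level MT metric scores
    (continuous) and downstream task outcomes (1 = success, 0 = failure).
    This equals Pearson correlation when one variable is binary."""
    x = np.asarray(metric_scores, dtype=float)
    y = np.asarray(task_success, dtype=float)
    return np.corrcoef(x, y)[0, 1]

# Toy segments: metric scores loosely aligned with downstream success.
scores  = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]
success = [1,   1,   0,    1,   0,   0]
r = metric_task_correlation(scores, success)
```

A value of `r` near zero on real data is what the abstract means by "negligible correlation" between the metric and the extrinsic outcome.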
Compositional generalization is a basic mechanism in human language learning, which current neural networks struggle with. A recently proposed Disentangled sequence-to-sequence model (Dangle) shows promising generalization capability by learning specialized encodings for each decoding step. We introduce two key modifications to this model which encourage more disentangled representations and improve its compute and memory efficiency, allowing us to tackle compositional generalization in a more realistic setting. Specifically, instead of adaptively re-encoding source keys and values at each time step, we disentangle their representations and only re-encode keys periodically, at some interval. Our new architecture leads to better generalization performance across existing tasks and datasets, and a new machine translation benchmark which we create by detecting naturally occurring compositional patterns in relation to a training set. We show this methodology better emulates real-world requirements than artificial challenges.
Extractive summarization produces summaries by identifying and concatenating the most important sentences in a document. Since most summarization datasets do not come with gold labels indicating whether document sentences are summary-worthy, different labeling algorithms have been proposed to extrapolate oracle extracts for model training. In this work, we identify two flaws with the widely used greedy labeling approach: it delivers suboptimal and deterministic oracles. To alleviate both issues, we propose a simple yet effective labeling algorithm that creates soft, expectation-based sentence labels. We define a new learning objective for extractive summarization which incorporates learning signals from multiple oracle summaries and prove it is equivalent to estimating the oracle expectation for each document sentence. Without any architectural modifications, the proposed labeling scheme achieves superior performance on a variety of summarization benchmarks across domains and languages, in both supervised and zero-shot settings.
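The expectation-based soft labels can be sketched directly: each sentence's label is the fraction of oracle summaries that include it, i.e. an estimate of the oracle expectation (how the set of oracles is obtained is simplified here to an explicit list):

```python
def oracle_expectation_labels(num_sentences, oracle_summaries):
    """Soft sentence labels for extractive summarization: the fraction of
    oracle summaries that include each sentence. `oracle_summaries` is a
    list of sets of sentence indices (multiple near-tied oracle extracts)."""
    return [sum(i in oracle for oracle in oracle_summaries) / len(oracle_summaries)
            for i in range(num_sentences)]

# Three near-tied oracle extracts for a 5-sentence document.
oracles = [{0, 2}, {0, 3}, {0, 2}]
labels = oracle_expectation_labels(5, oracles)
# Sentence 0 appears in every oracle (label 1.0); sentence 2 in two of three.
```

Unlike a single greedy oracle, these labels are soft (values between 0 and 1) and aggregate several plausible extracts instead of committing to one deterministic choice.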
The ability to convey relevant and faithful information is critical for many tasks in conditional generation, and yet remains elusive for neural seq-to-seq models, whose outputs often reveal hallucinations and fail to correctly cover important details. In this work, we advocate planning as a useful intermediate representation for rendering conditional generation less opaque and more grounded. We propose a new conceptualization of text plans as a sequence of question-answer (QA) pairs. We enhance existing datasets (e.g., for summarization) with a QA blueprint operating as a proxy for both content selection (i.e., what to say) and planning (i.e., in what order). We obtain blueprints automatically by exploiting state-of-the-art question generation technology and converting input-output pairs into input-blueprint-output tuples. We develop Transformer-based models, each varying in how they incorporate the blueprint into the generated output (e.g., as a global plan or iteratively). Evaluation across metrics and datasets demonstrates that blueprint models are more factual than alternatives which do not resort to planning and allow tighter control of the generated output.
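The input-blueprint-output construction can be sketched as serializing the automatically generated QA pairs into a plan string the model emits before the summary. The serialization format and field names below are illustrative, not the paper's exact scheme:

```python
def make_blueprint_example(document, qa_pairs, summary):
    """Turn an (input, output) pair plus automatically generated QA pairs
    into an input -> blueprint -> output training tuple. The blueprint acts
    as content selection (what to say) and planning (in what order)."""
    blueprint = " ".join(f"Q: {q} A: {a}" for q, a in qa_pairs)
    # Target: the model is trained to generate the plan first, then the summary.
    return {"input": document, "target": f"{blueprint} ||| {summary}"}

example = make_blueprint_example(
    document="Ada Lovelace wrote the first published algorithm in 1843.",
    qa_pairs=[("Who wrote the first published algorithm?", "Ada Lovelace"),
              ("When was it published?", "1843")],
    summary="Ada Lovelace published the first algorithm in 1843.",
)
```

Because the plan is generated text, it can also be edited at inference time, which is where the tighter control over the output comes from.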
The scale of the state space of discrete graphical models is crucial for model capacity in the era of deep learning. Inference with dynamic programming (DP) based algorithms typically works with a small number of states (usually fewer than hundreds). In this work, we propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states. Our method is widely applicable to classical DP-based inference (partition, marginal, reparameterization, entropy, etc.) and different graph structures (chains, trees, and more general hypergraphs). It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly and learned with gradient-based optimizers. Our core technique is randomization: restricting and reweighting DP on a small subset of nodes, which reduces computation by orders of magnitude. We further achieve low bias and variance via Rao-Blackwellization and importance sampling. Experiments on different inference tasks over different graphs demonstrate the accuracy and efficiency of our method. Furthermore, when using RDP to train a scaled structured VAE, it outperforms baselines in terms of test likelihood and successfully prevents posterior collapse.
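The restrict-and-reweight idea can be sketched on the forward algorithm of a chain (HMM-style) model: at every DP step, sum over a sampled subset of states and reweight by the inclusion probability, giving an unbiased (Horvitz-Thompson) estimate of the full sum. This is a minimal sketch of the randomization principle, not the paper's full RDP family; with the subset equal to all states it recovers the exact DP:

```python
import numpy as np

def randomized_forward(init, trans, emit, k, rng):
    """Estimate the partition function of a chain model by restricting
    each DP step to k sampled states and reweighting by the inclusion
    probability k/N, so the estimate stays unbiased.
    init: (N,), trans: (N, N), emit: (T, N)."""
    n = init.shape[0]
    alpha = init * emit[0]
    for t in range(1, emit.shape[0]):
        idx = rng.choice(n, size=k, replace=False)   # restrict
        alpha_sub = alpha[idx] * (n / k)             # reweight
        alpha = (trans[idx].T @ alpha_sub) * emit[t]
    return alpha.sum()

rng = np.random.default_rng(0)
n, T = 8, 5
init = np.full(n, 1.0 / n)
trans = rng.random((n, n)); trans /= trans.sum(axis=1, keepdims=True)
emit = rng.random((T, n))

exact = randomized_forward(init, trans, emit, k=n, rng=rng)  # k=N: exact DP
approx = np.mean([randomized_forward(init, trans, emit, k=4,
                                     rng=np.random.default_rng(s))
                  for s in range(200)])
```

Averaging many restricted runs (`approx`) converges to the exact partition function while each run touches only `k` of the `n` states per step, which is where the orders-of-magnitude savings come from at large state counts.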
Movie trailers perform multiple functions: they introduce viewers to the story, convey the mood and artistic style of the film, and encourage audiences to see the movie. These diverse functions make automatic trailer generation a challenging endeavor. We decompose it into two subtasks: narrative structure identification and sentiment prediction. We model movies as graphs, where nodes are shots and edges denote semantic relations between them. We learn these relations using joint contrastive training, which leverages privileged textual information (e.g., characters, actions, situations) from screenplays. An unsupervised algorithm then traverses the graph and generates trailers that human judges prefer over ones generated by competitive supervised approaches.
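The unsupervised traversal can be sketched as a greedy walk over the shot graph that follows strong semantic edges while favouring high-sentiment shots. The scoring rule and weights below are illustrative only; the paper's criteria combine narrative structure and sentiment in a more principled way:

```python
import numpy as np

def generate_trailer(adj, sentiment, start, length, w=0.5):
    """Greedy traversal of a shot graph: from the current shot, move to the
    unvisited neighbour maximizing edge weight plus w * sentiment score.
    adj: (N, N) semantic-relation weights, sentiment: (N,) shot scores."""
    path = [start]
    visited = {start}
    while len(path) < length:
        cur = path[-1]
        candidates = [(adj[cur, j] + w * sentiment[j], j)
                      for j in range(len(sentiment)) if j not in visited]
        if not candidates:
            break
        _, nxt = max(candidates)
        path.append(nxt)
        visited.add(nxt)
    return path

adj = np.array([[0.0, 0.9, 0.1, 0.2],
                [0.9, 0.0, 0.8, 0.1],
                [0.1, 0.8, 0.0, 0.7],
                [0.2, 0.1, 0.7, 0.0]])
sentiment = np.array([0.2, 0.5, 0.9, 0.1])
trailer = generate_trailer(adj, sentiment, start=0, length=3)
```

The returned index sequence is the proto-trailer: an ordered shot selection obtained without any trailer supervision.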
Bidirectional Encoder Representations from Transformers (BERT; Devlin et al. 2019) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several intersentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings.
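The mismatch-alleviating schedule gives the pretrained encoder a smaller peak learning rate and longer warmup than the randomly initialized decoder. A sketch of this Noam-style (linear warmup, inverse-square-root decay) schedule follows; the constants reflect the kind of settings used for fine-tuning BERT-based summarizers, but treat the exact values as indicative rather than the paper's definitive configuration:

```python
def lr_at(step, peak_lr, warmup):
    """Noam-style schedule: linear warmup to a peak, then inverse-sqrt decay."""
    return peak_lr * min(step ** -0.5, step * warmup ** -1.5)

# Separate schedules for the pretrained encoder (small peak LR, long warmup)
# and the randomly initialized decoder (larger peak LR, shorter warmup).
def encoder_lr(step):
    return lr_at(step, peak_lr=2e-3, warmup=20_000)

def decoder_lr(step):
    return lr_at(step, peak_lr=0.1, warmup=10_000)
```

Early in training the decoder learns at a much higher rate while the encoder moves slowly, shielding the pretrained representations from noisy gradients produced by the still-untrained decoder.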